METHOD AND SYSTEM TO PROVIDE GUIDANCE FOR CARRYING OUT A TASK
Patent Abstract:
A cognitive assistant is described that allows a maintainer to talk to an application using natural language. The maintainer can quickly interact with a hands-free application without having to use complex user interfaces or memorized voice commands. The assistant provides instructions to the maintainer using augmented reality audio and visual cues. The assistant guides the maintainer through maintenance tasks and verifies proper execution using IoT sensors. If, after completing a step, the IoT sensors are not as expected, the maintainer is notified of how to remedy the situation.
Publication number: BR102017025093A2
Application number: R102017025093-8
Filing date: 2017-11-23
Publication date: 2018-06-26
Inventors: William J. Wood; Mark H. Boyd; Melanie K. Lorang; David H. Kusuda
Applicant: The Boeing Company
IPC main class: G06Q 10/10
Patent Description:
(54) Title: METHOD AND SYSTEM TO PROVIDE GUIDANCE TO EXECUTE A TASK (51) Int. Cl .: G06Q 10/10 (30) Unionist Priority: 12/09/2016 US 62 / 432,495, 12/09/2016 US 62 / 432, 49513/07/2017 US 15 / 649,382 (73) Holder (s): THE BOEING COMPANY (72) Inventor (s): WILLIAM J. WOOD; MARK H BOYD; MELANIE K. LORANG; DAVID H. KUSUDA (74) Attorney (s): KASZNAR LEONARDOS INTELLECTUAL PROPERTY (57) Summary: A cognitive assistant is described that allows a maintainer to speak to an application using natural language. The maintainer can quickly interact with a hands-free application without the need to use complex user interfaces, or memorized voice commands. The assistant provides instructions to the maintainer using augmented reality visual and audio cues. The assistant will guide the maintainer through maintenance tasks and verify proper execution using loT sensors. If after completing a step the loT sensors are not as expected, the maintainer is notified of how to resolve the situation. 1/44 “METHOD AND SYSTEM TO PROVIDE GUIDANCE TO EXECUTE A TASK” FUNDAMENTALS Field [001] The present description refers to systems and methods for producing and maintaining physical structures and, in particular, a system and method for providing guidance to users to perform tasks on physical structures. Description of Related Technique [002] The traditional maintenance activity uses paper-based maintenance manuals that the user (maintainer) references to look for suitable maintenance tasks and follow each step of those tasks. As a consequence, the maintainer must locate the paper maintenance manual, find the appropriate portion of the manual related to the desired maintenance task, determine what tools and other resources are needed (usually located elsewhere) and obtain those tools and resources. The maintainer must then refer to the paper maintenance manual for each step, perform the step, and then refer back to the paper maintenance manual to determine the next step. Since the maintainer must constantly stop work to reference the instructions in the manual for each step, it extends the time needed to perform the task and increases the chances of errors on the part of the maintainer. In short, paper-based maintenance manuals must be stored and retrieved, and cannot be easily searched and consulted, and require the maintainer to move between the maintenance task and the associated manual, delaying the maintenance process. [003] Although digital maintenance manuals exist and alleviate some of these disadvantages, making the manual portable and searchable, the maintainer must still have hands free to interact with the digital maintenance manual. A digital maintenance manual can include a video, but Petition 870170090358, of 11/23/2017, p. 121/178 2/44 these videos are usually played faster or slower than the maintainer's ability to perform the steps, so the maintainer must constantly start and restart (or reverse) the video playback. [004] Systems are available that improve these digital maintenance manuals. For example, augmented reality has been used to guide maintainers through maintenance steps using augmented reality glasses. 
Such a system was proposed by Bavarian Motor Works, as described in "End of the Mechanic: BMW Glasses Make it Possible for Anyone to Spot and Fix a Car Engine Fault Just by Looking at It", by Victoria Woollaston, of the Daily Mail, published on January 21, 2014, which is incorporated herein by reference. An example of a similar system is described in "Augmented Reality for Maintenance and Repair (ARMAR)", by Steve Henderson and Steven Feiner, Columbia University Computer Graphics and User Interfaces Lab, 2016, also incorporated herein by reference. This reference describes the use of real-time computer graphics, overlaid on and registered with the actual equipment being repaired, to improve the productivity, accuracy and safety of maintenance personnel, through the use of head-worn, motion-tracked displays that augment the user's physical view of the system with information such as subcomponent labeling, guided maintenance steps, real-time diagnostic data, and safety warnings. Such systems may use smart helmets, such as those available from DAQRI, described in "Daqri Smart Helmet", by Brian Barrett, Wired, January 7, 2016, also incorporated herein by reference. [005] Unfortunately, such existing systems do not solve many maintenance problems. For example, user interaction with the BMW system is limited to the maintainer providing verbal commands, such as "next step". The system does not determine the task to be performed, or the steps of that task, from requests expressed in the user's ordinary language. Consequently, for a given problem, the maintainer must still determine which task to perform (for example, there is no diagnostic capability, nor any feature that allows the user to state an objective in ordinary language in order to find the appropriate task or task steps). The ARMAR and DAQRI systems are equally deficient in this regard. [006] The previous systems also fail to monitor the performance of the steps (to provide feedback confirming that the step was performed correctly, or indicating corrective action if it was not) or to provide data recording the performance of the step. For example, the step to be performed may be to tighten a nut onto a bolt to a particular torque. [007] With regard to monitoring the performance of the steps to provide feedback, none of the previous systems detects whether the maintainer has failed to tighten the nut to the appropriate specification, whether the maintainer is using the appropriate tool, or whether the maintainer has failed to align the nut on the bolt threads before tightening it. Such errors are unlikely to be discovered in a timely manner and, if discovered, waste time, as they may require disassembly or execution of the task steps in reverse order to allow the error to be corrected. In addition, none of the previous systems can detect whether the step is being performed correctly while the user is executing it and thereby avoid damage to the tools used or to the structure being worked on. [008] Regarding recording, none of the previous systems records any data regarding the performance of the step. Such data could be used to further improve maintenance procedures or to estimate how long such procedures should take.
The data can also be used to identify causes of subsequent failures (for example, subsequent failures were more common when a nut from a specific vendor was torqued to the top end of a torque specification). [009] Therefore, it would be desirable to have a method and apparatus that addresses at least some of the issues discussed above, as well as other possible issues. SUMMARY [0010] To meet the requirements described above, this document describes a system and method for providing guidance for performing a task having at least one step performed on a physical structure at a station. In one embodiment, the method comprises receiving, in a guidance processing unit, a command from a performance entity, the command invoking a task; determining, in the guidance processing unit, at least one step from the command; transmitting, from the guidance processing unit to the performance entity, instruction data illustrating the performance of the at least one step; receiving, in the guidance processing unit, real-time sensor data generated by a sensor proximate the physical structure that senses performance of the step; and computing a step performance measure according to the sensor data. The system and method can make maintenance operations more efficient, for example by making maintenance operations completely hands-free and conveying information as a combination of 3D audio and visual cues through an augmented reality environment, thereby reducing errors, rework, safety risks and delays in the maintenance, repair and servicing of aircraft or other products on which complex maintenance operations are performed. The system and method can also reduce training requirements and lower execution costs for the maintenance, repair and servicing of aircraft or other products on which complex maintenance operations are performed. [0011] Another embodiment is evidenced by a system for providing guidance to perform a task having at least one step performed on a physical structure at a station, in which the system comprises a sensor proximate the physical structure, a display device and a guidance processing unit comprising a processor communicatively coupled to a memory storing processor instructions. The instructions include instructions for receiving a command from a performance entity, the command invoking the task; determining at least one step from the command; transmitting instruction data illustrating the performance of the at least one step to the performance entity for presentation by the display device; receiving real-time sensor data generated by the sensor proximate the physical structure that senses performance of the step; and computing a step performance measure according to the sensor data. The system can make maintenance operations more efficient, for example by making maintenance operations completely hands-free and conveying information as a combination of 3D audio and visual cues through an augmented reality environment, thereby reducing errors, rework, safety risks and delays in the maintenance, repair and servicing of aircraft or other products on which complex maintenance operations are performed. The system and method can also reduce training requirements and lower execution costs for the maintenance, repair and servicing of aircraft or other products on which complex maintenance operations are performed.
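By way of illustration only, the control flow of the method summarized above can be sketched in Python as follows; the class, field and function names are illustrative assumptions and do not correspond to any particular implementation of the guidance processing unit.

from dataclasses import dataclass

@dataclass
class Step:
    step_id: str
    instruction: str      # text rendered to the performance entity as audio/visual cues
    sensor_id: str        # IoT sensor proximate the physical structure that senses the step
    target: float         # expected sensor value when the step has been performed
    tolerance: float

def step_performance_measure(sensor_value: float, step: Step) -> float:
    # Compare real-time sensor data against the expected value for the step.
    return abs(sensor_value - step.target)

def run_step(step: Step, present, read_sensor) -> bool:
    present(step.instruction)                              # transmit instruction data
    measure = step_performance_measure(read_sensor(step.sensor_id), step)
    return measure <= step.tolerance                       # verify proper execution

# Simulated use: torque a compressor nut to 10 N*m, observed through a torque sensor.
step = Step("S1", "Torque the compressor nut to 10 N*m", "torque_wrench_1", 10.0, 0.1)
print(run_step(step, present=print, read_sensor=lambda _sensor_id: 9.95))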
[0012] The characteristics, functions and advantages that have been discussed can be achieved independently in various modalities of Petition 870170090358, of 11/23/2017, p. 125/178 6/44 present invention or can be combined in still other modalities, whose additional details can be seen with reference to the description and drawings below. BRIEF DESCRIPTION OF THE DRAWINGS [0013] Referring now to the drawings in which similar reference numbers represent corresponding parts across all of them: FIG. 1 is a diagram showing an exemplary maintenance / assembly / test installation; FIG. 2 presents a functional block diagram of a station modality; FIGS. 3A-3C are diagrams that depict a modality of exemplary process steps, which can be used to guide users in completing tasks; FIG. 4 is a diagram illustrating the operation of the guidance processing unit with other elements of the stations and the central processor; FIGS. 5A and 5B are diagrams illustrating an exemplary test bed implementation of the guidance processor and related elements of the station; FIG. 6 is a diagram illustrating an exemplary computer system, which could be used to implement processing elements of the geolocation system. DESCRIPTION [0014] In the description below, reference is made to the attached drawings that form a part of it, and in which various modalities are shown by way of illustration. It is understood that other modalities can be used, and that structural changes can be made without departing from the scope of the present description. Petition 870170090358, of 11/23/2017, p. 126/178 7/44 Overview [0015] The system and method described here improves the performance and efficiency of a maintainer of a platform or other physical structure, helping the maintainer to locate correct maintenance procedures, guiding the maintainer through each maintenance step using audio and visual queues and validating the correction of each step through the use of sensor feedback. In one mode, the sensors are integrated through the Internet of Things (IoT). The system provides the ability to: (1) use visual sensors and other IoT sensors to perform initial diagnostics to select the appropriate order of work for the maintainer, (2) automatically collect images and data associated with maintenance operations to provide a track of sensor-validated audit rather than relying on manual maintainer input; (3) automatically collect performance and cycle time metrics that show how long maintenance steps take to identify process improvement opportunities, including maintainer training opportunities. In one embodiment, the system operates under voice control using natural language to issue commands and control the maintenance environment. For example, the operator can audibly request that a light be turned on, instead of reaching for a light switch. [0016] The system and method add several cognitive capabilities to a new maintenance tool, which includes speech to text, text to speech, natural language processing, machine learning and augmented reality. These capabilities allow the maintainer to interact with the maintenance tool using natural spoken commands, without the need to memorize exact voice commands. The tool also leverages natural language processing and machine learning to determine the intent of voice commands, and reacts accordingly Petition 870170090358, of 11/23/2017, p. 127/178 8/44 with such commands. 
Feedback from the tool is presented to the maintainer using hands-free augmented reality, providing natural language audio commands and 3D visual information overlaid on real-world objects. [0017] The tool adds several different capabilities to make a final system more powerful than each individual component. In one embodiment, the tool combines IoT, cognitive natural language processing (Cognitive Natural Language Processing) and advanced document indexing and querying. This allows the maintainer to easily access all the knowledge necessary to perform maintenance quickly and effectively. The tool also makes maintenance operations completely hands-free by transmitting information as a combination of 3D audio and visual indications, through an augmented reality environment. The tool also adds cognitive capabilities and IoT feedback to existing maintenance tools that would otherwise require mostly manual and unverified maintenance steps. The addition of cognitive capabilities allows the maintainer to find relevant maintenance information quickly and efficiently, and adding IoT feedback verifies the proper completion of each step, reducing rework. [0018] Although described primarily in terms of performing maintenance tasks, the tool is equally at home in production applications, or anywhere where tasks are performed on physical structures, including manufacturing and quality control. For example, the techniques described below are applicable to the assembly and testing of physical structures, which include automobiles, aircraft, spaceships and vessels. [0019] FIG. 1 is a diagram showing an exemplary maintenance / assembly / test (MAT) 100 installation (hereinafter referred to simply as an installation or MAT 100). The MAT 100 has a Petition 870170090358, of 11/23/2017, p. 128/178 9/44 or more stations 102A-102N (hereinafter alternatively referred to as stations 102), on which tasks are performed. Each station 102 includes a physical structure 106 on which maintenance is performed, parts are assembled / disassembled, or tests are being carried out. Stations 102 may also comprise one or more tools 110A-110N (alternatively referred to hereinafter as tool (s) 110) that are used to perform tasks in physical structure 106. Such tasks may be performed by one or more users, such as as 108P person (s) or 108R robot (s) (hereinafter, alternatively referred to as 108 user). [0020] One or more of the physical structure 106, tools 110 and user 108 include one or more sensors (collectively referred to as sensors 112) that measure or monitor a characteristic of the associated physical structure 106, tools 110, or user 108, respectively. For example, physical structure 106 may include one or more physical structure sensors 112B that sense a characteristic of physical structure 106. This characteristic may include a physical characteristic, such as the position or angle of an appendix with respect to another portion of the structure physics 106, an electrical characteristic, such as a voltage or current measurement in a conductor or physical structure 106, or any other quality measurable by a sensor of any kind. Physical structure sensors 112B may include sensors that are part of the completed assembly of physical structure 106 or physical structure sensors 112B that are attached to physical structure 106 for maintenance or production purposes, and subsequently removed before assembly or maintenance is completed. 
For example, physical structure 106 may comprise a flight control surface such as a rudder, which includes an integral potentiometer that measures the position of the rudder for navigation and control purposes. In this example, this potentiometer can be used as one of the physical structure sensors 112B of physical structure 106, not only for operational assembly, but also for testing purposes. In Petition 870170090358, of 11/23/2017, p. 129/178 10/44 other modalities, other physical structure sensors 112B can be connected to the physical structure to perform the MAT operation. For example, a separate potentiometer can be attached to the rudder, and measurements of the rudder position with this sensor can be compared to the measured rudder position by the integral potentiometer. [0021] Similarly, one or more of the tools 110, each can include one or more sensors 112DA-112DN which are used to sense or measure a characteristic of the associated tool 110A-110N, respectively. This characteristic can also include one or more of a physical characteristic, electrical characteristic or any other quality measured by sensors 112. For example, tool 110A can comprise a torque wrench and sensor 112DA can measure the torque being transmitted over a portion of physical structure 106, such as a bolt or nut, by means of the torque wrench. Such 112D sensors may also include temperature sensors (to monitor the temperature of tool 110 during use). [0022] Likewise, one or more of the 108 users can understand or use one or more 112 sensors. The 108 user (s) can understand a 108P person or a 108R robot, for example. The 108R robot may include one or more 112F robot sensors to sense a characteristic of the 108R robot one or more characteristics of the other elements of station 102A (including physical structure 106, tools 110 or person 108P). In one embodiment, the robot 108R includes a plurality of potentiometers, which provide an indication of the relative position of the structures of the robot 108R and from which the position of the head or work surface of the robot 108R can be determined. This can be used, for example, to determine the position of the working end of the robot 108, as well as any of its structures as a function of time. In another modality, the robot 108R includes a camera or other visual sensor arranged Petition 870170090358, of 11/23/2017, p. 130/178 11/44 at, or near, the working end, so that visual representations of the region surrounding the working end can be obtained. 112F sensors can be integrated with the 108R robot (for example, with sensor measurements being used by the 108R robot to control robot responses to commands) or can be added to the 108R robot only for use at station 102 to perform MAT operations. Such 112F robot sensors may also include temperature sensors (to monitor the temperature of the 108R robot or portions of the 108R robot during use). [0023] As another example, the 108P person can use one or more 112E sensors. Such 112E sensors may include, for example, an augmented reality headset. Such headsets typically comprise a stereoscopic screen mounted on the head (providing separate images for each person's eyes) and head movement tracking sensors. Such motion tracking sensors may include, for example, inertial sensors, such as accelerometers and gyroscopes, structured light systems and eye tracking sensors. When the augmented reality headset is worn by the 108P person, the person can see their surroundings, but stereoscopic images are imposed on those surroundings. 
Such stereoscopic images may include, for example, portions of physical structure 106 or changes in physical structure 106 requested by task steps. Inertial sensors and eye sensors can be used to determine the direction the user is looking for in inertial space and images of the physical structure overlaid on these images. [0024] Since augmented reality headsets not only record video images but also present video images superimposed on real images, such headsets can be considered not only as sensors, but also presentation elements of the headset augmented reality headset 114B, which presents information to user 108. Station 102A may also include a Petition 870170090358, of 11/23/2017, p. 131/178 12/44 more conventional display, such as a display 114A, for displaying instructional information. [0025] 112E sensors can also include other sensors 112E, such as inertial sensors mounted on appendages, such as accelerometers or gyroscopes, which can measure the inertial state of a person's appendages. In some embodiments, the sensors may include sensors to monitor the 108P person, such as sensors that measure temperature or heart rate. The information provided by such sensors is useful in determining whether the tasks performed by the 108P person are particularly difficult. [0026] Station 102A may also include environmental sensors 112A. Environmental sensors 112A are sensors that measure characteristics of the environment of station 102. This may include, for example, sensors that measure ambient temperature or humidity (for example, using a thermometer and a hygrometer), visible sensors that determine the physical location or the proximity of any of the elements of station 102 to each other, including elements of physical structure 106, tools 110, user 108 or orientation processing unit 104. Environmental sensors 112A may include elements that are arranged on other elements from 102A station. 112A environmental sensors can comprise passive, active or semi-active systems. For example, a modality of an active environmental sensor may comprise a reflector positioned on another element of station 102 (for example, a robot arm 108R), an illuminator that illuminates the reflector, and a visual sensor that measures the position of the illuminated sensor . An example of a passive environmental sensor is a visual sensor, such as a video or still image camera, which can be sensitive to visible, ultraviolet or infrared wavelengths. 112A environmental sensors may also include radio frequency identification (RFID) systems that can be used to Petition 870170090358, of 11/23/2017, p. 132/178 13/44 identify the physical structure 106 and its characteristics. [0027] Any or all of the sensors 112 are communicatively coupled to an orientation processing unit 104 (indicated by the symbols (§)), allowing the orientation processing unit 104 to receive data from the sensors. In addition, the orientation processing unit 104 can be communicatively coupled to the orientation processing units of other stations 102B-102N and to a central processor 120 (as indicated by the symbols (§)). [0028] FIG. 2 is a diagram of a mode of station 102. Station 102 includes orientation processing unit 104, one or more sensors 112, effectors 202, display devices 114 (including display 114A and headset display elements) augmented reality 114B) and physical structure 106. [0029] The orientation processing unit 104 receives sensor data from sensors 112 and in some embodiments, it also provides sensor commands for sensors 112. 
Such commands may include, for example, commands relating to the resolution or active range of the sensors 112. The orientation processing unit 104 also sends commands and receives data from effectors. Such effectors can include, for example, a stepper motor that controls one of the sensors 112, or the robot 108R. [0030] In the illustrated embodiment, the orientation processing unit 104 includes an interface 206 communicatively coupled to a processor 208. Sensors 112 provide sensor data to processor 208 via interface 206. In addition, sensors 112 can receive commands from from the processor through interface 206. Similarly, processor 208, through interface 206, provides commands for effectors 202 and can also receive data from effectors 202. [0031] The orientation processing unit 104 provides instruction data that illustrates the performance of the steps performed on the physical structure Petition 870170090358, of 11/23/2017, p. 133/178 14/44 to complete tasks for display devices 114, and can also provide commands to control sensors 112 via interface 206. Likewise, display devices 114 can provide commands or information to the guidance processing unit 104 . [0032] The orientation processing unit 104 comprises a processor 208 that is communicatively coupled to one or more memories that store processor instructions that, when executed, cause the orientation processing unit 104 to perform the operations described below . Processor 208 may include several processors 208 and such processors 208 may be located apart from each other. In an embodiment described below, processor 208 comprises distributed processing elements. [0033] FIGS. 3A-3B are diagrams depicting a modality of exemplary process steps, which can be used to guide user 108 in completing tasks that involve one or more steps in physical structures 106. In block 302, user 108, or performance, transmits a command that invokes a task to be performed on the physical structure 106. In block 304, the command is received by the orientation processing unit 104. [0034] In one embodiment, this command is a hands-free command (for example, voice) that is sensed by an audio sensor and provided to the orientation processing unit 104, where the audio command is recognized by a module speech recognition and translated into text. Such voice commands can be in a fixed command language (where the onus is on user 108 to learn the syntax and phrase formation required by the guidance processing unit 104), and natural language (where the onus is on the unit guidance processing 104 to interpret voice commands and Petition 870170090358, of 11/23/2017, p. 134/178 15/44 translate them into a syntax and sentence formation necessary to search for the appropriate task and steps. Fixed command languages can include domain-specific training, carried out by training software components that translate the user's speech into text. [0035] In other embodiments, the command comprises a digital command via a controller device communicatively coupled to the orientation processing unit 104, such as a remote control, computer keyboard, mouse, game controller, or screen display touch. In another mode, the command is sensed by a monitoring system and translated into a digital command. 
For example, the command can be implemented, totally or partially, using gestures executed by the user 108, sensed by an image sensor (for example, the environment sensor 112A) and provided to the guidance processing unit, where such gestures are analyzed , interpreted and translated into digital commands. [0036] In still other modalities, the command is a digital command received through a system-to-system message from the control system to robot 108R, or other robots at other stations 102, or central processor 120. [0037] Then, the processing processing unit 104 determines one or more steps to be performed from the received command. The commands received can be in many different forms. In one embodiment, the command comprises a generalized objective rather than a specific task. For example, user 108 can issue a command “the air conditioner is not functional”. Given this command, the orientation processing unit 104 determines which problems with the physical structure may be the cause of the air conditioner not being functional. In making this determination, guidance processing unit 104 can accept input from an on-board diagnostic type sensor Petition 870170090358, of 11/23/2017, p. 135/178 16/44 (OBD). The guidance processing unit 104 then determines one or more tasks that respond to the command, and determines at least one step from the given task. For example, in the case of the failed air conditioning example, the guidance processing unit can generate a plurality of tasks, each to check each component of the air conditioning system, as well as a task to diagnose which of the components is defective. Each task can have one or more subtasks hierarchically below each task. At the bottom of the hierarchy of tasks and subtasks are the steps, which represent a suitable activity unit for specific instructions for the user 108. In the preceding example, the step can be to remove a single screw from the air conditioning compressor or to remove a subsystem of the air conditioning compressor. The hierarchical level at which the steps are defined may depend on the complexity of the step and the user experience 108. For example, the voice command can include an indication of how experienced the user is 108 and the hierarchical level of steps defined according to this level of experience. As further defined here, guidance processing unit 104 can store performance data that indicates how well user 108 has performed steps or tasks, and this information can be used to determine user level 108, resulting in steps or instructions appropriate for the user experience level. In this case, user 108 can be determined by user input (for example, by typing or speaking the user's name), through RFID technologies or other means. [0038] In one embodiment, determining the task from the command received comprises generating a database query from the command received using a natural language interpreter. Such interpreters allow users to issue commands in simple conversational language. The words in the command are analyzed, and the words Petition 870170090358, of 11/23/2017, p. 136/178 17/44 analyzed are analyzed for syntax, semantics, speech and speech. The result is a database query in the appropriate language and syntax. This query is provided to a database communicatively coupled to the guidance processing unit (for example, central processor 120), and the task is determined from the result of the database query. 
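As a simplified illustration of mapping a natural-language command to a task and then to a step query, the sketch below uses keyword matching; a production system would instead rely on a speech-to-text and natural-language-processing service, and the task names, trigger phrases and query fields shown here are invented for the example.

from typing import Optional

TASK_INDEX = {
    "air_conditioner_inoperative": ("air conditioner", "a/c", "not cooling", "not functional"),
    "rudder_rigging_check": ("rudder", "flight control", "rigging"),
}

def resolve_task(utterance: str) -> Optional[str]:
    # Score each known task by how many of its trigger phrases occur in the command.
    text = utterance.lower()
    scores = {task: sum(phrase in text for phrase in phrases)
              for task, phrases in TASK_INDEX.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else None

def build_step_query(task: str, experience_level: str) -> dict:
    # Shape of a query issued against the task/step database (illustrative fields).
    return {"task": task, "detail": "summary" if experience_level == "expert" else "full"}

task = resolve_task("the air conditioner is not functional")
print(task)                          # air_conditioner_inoperative
print(build_step_query(task, "novice"))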
Once the task is identified, a query can be generated based on the task, to retrieve the appropriate steps to perform, subject to the user's experience and skills. [0039] In one embodiment, the task is one of a plurality of tasks to be performed on the physical structure 106, and the query of the database is still determined according to the current data of the context. Such context data includes, for example, information about others of the plurality of tasks performed on the physical structure 106 and restrictions on the task imposed by the physical structure itself, by the environment (for example, the position of other physical elements or structures, temperature, humidity , performance limitations of tools 110). For example, a task may have been performed on physical structure 106 which changes the steps necessary to perform another specific task. Going back to the air conditioning system example, another task may have resulted in the removal of a subsystem or part, making it unnecessary to remove that subsystem or part in the current task. Conversely, a step or task may have been performed, which will make additional steps necessary to perform the current task. For example, if a previous task involved pasting a component, the current task may require waiting a period of time for that glue to cure. Performance measurements of previously completed tasks can also be used to change or modify the definition of the steps to be performed on the current task. For example, if a previous task included the step of tightening a nut on a bolt to a particular torque, the measured torque applied to the bolt (a measure of performance from that previous step) could be used to Petition 870170090358, of 11/23/2017, p. 137/178 18/44 estimate the torque required to remove this nut for a subsequent task. [0040] In other modalities, the task can be determined using stored information relevant to the work being performed, such as a work order or other information. The task (and the steps) can also be determined based on restrictions imposed by the state of the physical structure 106 or the environment. For example, a prior task may have disassembled at least a portion of the physical structure 106, in which case the steps necessary to disassemble the portion of the physical structure are not necessary. Conversely, a previous task may have modified physical structure 106 in such a way that additional steps need to be taken. As another example, a step may be necessary to be performed at a given room temperature. If the room temperature is too low, the task may include the step of raising the room temperature to a higher value, a step that would not be necessary if the room temperature was sufficient. As another example, a voltage measured in a potentiometer of the physical structure may depend on the ambient temperature. The 112A room sensors can be used to sense this temperature and determine the appropriate potentiometer configuration based on that room temperature. Other environmental restrictions may include, for example, the location of other elements of station 102, such as the robot arm 108R, tools 110 or other physical structures, since the location of these structures may prevent the dismantling of physical structure 106. In this In this case, environmental sensors can include visible light sensors that sense the location of physical structure 106 and nearby elements. 
In addition, the environment can include which tools 110 are available at station 102, and which must be retrieved from other locations for the task to be completed. In this case, 112A environmental sensors may include RFID tags on the tools. Petition 870170090358, of 11/23/2017, p. 138/178 19/44 [0041] Returning to FIG. 3A, the guidance processing unit then transmits instruction data illustrating the step (s), as shown in block 308. As shown in block 310, instruction data is received by one or more display devices 114 (which may include display 114A and / or the presentation elements of an augmented reality headset 114B, or a speaker, or other audio sensor (not shown)). In the case of visual display devices, a visual representation of the stage is presented. In one example, the visual representation of the step is shown on display 114A. In another example, the visual representation of the stage is presented in augmented reality through the presentation elements of the augmented reality headset 114B. [0042] In one embodiment, the instruction data that illustrates the stage (s) comprises a visual representation of the stage for presentation in augmented reality through the augmented reality headset. Augmented reality headsets generally comprise a stereoscopic head-mounted display (providing separate images for each eye), two speakers for stereo sound and head movement tracking sensors (which may include gyroscopes, accelerometers and light systems structured). Some augmented reality headsets also have eye tracking sensors. Through the use of head tracking sensors (and optionally, eye), the augmented reality headset is aware of its location and orientation in the space of inertia and provides this information to the orientation processing unit 104. The orientation processing 104 uses this information to determine what user 108 should be seeing and can superimpose other images on the image presented to user 108. For example, if user 108 is looking at physical structure 106, the processing unit orientation 104 can highlight a part Petition 870170090358, of 11/23/2017, p. 139/178 20/44 particular that must be physically manipulated to execute the instructions in the displays provided in the augmented reality headset 114B. User 108 can therefore be specifically informed of what actions must be completed for each part of the physical assembly, and is particularly useful, as the orientation processing unit 104 does the job of combining the illustrated step with background images viewed by the user. user 108. It also eliminates errors, as user 108 is less likely to confuse one part of physical structure 106 with another (for example, the user will not loosen the incorrect pin). Instructional data also typically comprises audio information (for example, a verbal description of the steps to be performed, or the aural representation of what the physical structure should sound during or after performing the step) and presentation elements of the headset. 114B augmented reality typically presents this information using the speakers in the 114B augmented reality headset. In one embodiment, verbal instructions are provided in natural language (for example, common speech in human conversation). Such natural language instructions can be in any language (for example, English, German, Chinese, etc.). Video and / or audio instructions can be provided on a variety of devices, including mobile computing devices, such as cell phones or tablet computers, as well as desktop computing devices. 
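The instruction data sent to a display device such as the augmented reality headset 114B can be pictured as a small structured payload that combines the spoken instruction with an overlay registered to the physical structure; the field names below are assumptions for illustration, not a defined schema.

import json

instruction_data = {
    "step_id": "S3",
    "audio": "Remove the four bolts securing the compressor cover.",   # natural-language audio cue
    "language": "en",
    "overlay": {
        "highlight_part": "compressor_cover_bolt_2",    # part to outline in the headset view
        "animation": "counterclockwise_rotation",
        "anchor_frame": "physical_structure_106",       # register the overlay to the real structure
    },
    "expected_duration_s": 90,
}
print(json.dumps(instruction_data, indent=2))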
[0043] User 108 receives the instruction data presented illustrating the step and starts executing the step, as shown in block 312. This is normally done by a 108P person, but can also be done by the 108R robot or with the 108P person working together with the 108R robot, with the 108P person and the 108R robot performing their own subset of steps each, or with the 108P person and the 108R robot working together on one or more of the steps. [0044] While the step is being executed, the sensor data, Petition 870170090358, of 11/23/2017, p. 140/178 21/44 that sense the performance of the step, are generated, as shown in block 314. The sensor data is transmitted, as shown in block 316 and received by the orientation processing unit 104, as shown in block 318. The data from sensors are used to monitor the performance of the step, for example, to determine the progress of the step and when and if the step was completed. The sensor data can also be used to determine when user 108 actually started to perform the step (useful later in computing the time it took user 108 to complete the step). The sensor data can also be used to store data that indicates the performance of the step over time. Such data can be useful in diagnosing failures at a later time. [0045] These sensor data can be generated by any one or combination of sensors 112 at station 102. Such sensors 112 can observe: [0046] One or more states of the physical structure 106 on which the task is being carried out: this can be accomplished using physical structure sensors 112B integral or connected to the physical structure itself or environmental sensors 112A. Physical structure sensors 112B and environmental sensors 112A can include visual and / or non-visual sensors. For example, in one embodiment, environmental sensors 112A include visual sensors that visually observe the state of physical structure 106, using object recognition techniques and patterns similar to those used in self-driving cars. Such 112B sensors may include embedded sensors and RFID tags. [0047] One or more states of the performance entity or user 108 who perform the task: 112E sensor (s) to measure such states may include devices used on the head, including audio sensors, image and video sensors, inertial measurement sensors such as gyroscopes and accelerometers, and personal sensors such as frequency monitors cardiac; Petition 870170090358, of 11/23/2017, p. 141/178 22/44 [0048] One or more device states (for example, tools 110, test equipment and parts) used to perform the task: this can be accomplished using 112DA-112DN sensors mounted on or integrated with tools 110, or environmental sensors 112A, in the same way that environmental sensors can observe the physical structure. Such tools can include RFID tags or built-in tool sensors; [0049] One or more states of devices that are collaborating in the task: this can include, for example, the state (s) of the robot 108R, as measured by sensor (s) of the robot 112F; or [0050] One or more states of the surrounding environment in which the task is being performed: this may include, for example, 112A environmental sensors that sense the temperature of station 102 or any element thereof, the humidity of the station, consumption of station energy, or the location of station 102 elements as a function of time. Such environmental sensors can include image sensors, audio sensors, temperature sensors and humidity sensors. 
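As noted above, the sensor stream can also reveal when the user actually began working on the step; a minimal sketch, assuming timestamped samples from, for instance, a torque sensor, is given below. The sample class and the start threshold are illustrative only.

import time
from dataclasses import dataclass, field
from typing import Iterable, Optional

@dataclass
class SensorSample:
    sensor_id: str
    value: float
    timestamp: float = field(default_factory=time.time)

def step_start_time(samples: Iterable[SensorSample],
                    start_threshold: float) -> Optional[float]:
    # The step is considered started at the first reading above a small threshold.
    for sample in samples:
        if sample.value > start_threshold:
            return sample.timestamp
    return None

readings = [SensorSample("torque_wrench_1", v) for v in (0.0, 0.2, 1.5, 6.0, 9.9)]
print(step_start_time(readings, start_threshold=0.5))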
[0051] In one example, the step is for user 108 to tighten a nut on a bolt using a tool 110A which is a torque wrench. The torque wrench includes a 112DA torque sensor that senses the amount of torque that is exerted by the 110A tool. In this embodiment, the 112DA torque sensor measures the torque being applied to the physical structure 106 and transmits data from the sensor including the measured torque to the orientation processing unit 104. Such transmission can be performed using wires or wirelessly. In another example, the step is for the user to turn a screw until a microswitch is activated (for example, switched from the off position to a on position). In this case, the sensor transmits a voltage associated with the off position while the screw is turned, and when the screw is turned to the correct position, the Petition 870170090358, of 11/23/2017, p. 142/178 23/44 switch is activated and a voltage associated with the on position is transmitted. In this case, real-time sensor data consisting of a voltage or other voltage is transmitted. [0052] Returning to FIG. 3A, the orientation processing unit 104 computes a performance measure from the sensor data, as shown in block 320. This can be done, for example, by comparing the received sensor data with a limit value and computing a measure of performance from the comparison of received sensor data and limit value. The performance measure can be used to monitor the performance of the stage and / or to verify the performance of the stage, as described below. In the example of the torque wrench being used to tighten a nut on a bolt to a particular torque, the received real-time sensor data is compared with a limit torque value (eg 10 Newton-meter) and a measurement performance is computed from the difference between the sensed torque and the limit torque value. [0053] In one embodiment, the guidance processing unit 104 optionally provides real-time feedback on the progress of the task to user 108. This is illustrated in the dashed blocks of FIG. 3B. Block 322 optionally generates and transmits feedback data according to the comparison of sensor data and limit value. This feedback data is optionally received by the display devices 114 and presented to the user 108, as shown in blocks 324 and 326. The performance data generation and feedback transmission allows the user to receive information about the progress of the step while executing the stage itself. For example, if user 108 tightens a nut on a pin, blocks 322326 can compute a performance measure comprising a difference between the measured torque and a torque requirement or limit, and Petition 870170090358, of 11/23/2017, p. 143/178 24/44 present this difference to the user in terms of a meter, digital display or other means. The feedback can also be auditory (for example, beeping when the correct torque value is reached) or both auditory and visual (for example, showing a visual portrait a comparison between the measured torque and the required torque on a target graph, and auditory feedback with the tone occurring when the proper torque has been reached) or changing the pitch, allowing the user to adjust the torque without looking at visual presentations. [0054] Feedback can also provide a comparison of environmental status with limit values in support of physical or safety considerations. 
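For the torque-wrench feedback described in paragraphs [0052] and [0053], the performance measure and the corresponding real-time feedback can be sketched as follows; the 10 N*m target and 0.1 N*m tolerance restate the values used in the text, and the feedback messages are illustrative.

def torque_feedback(measured_nm: float, target_nm: float = 10.0,
                    tolerance_nm: float = 0.1):
    error = target_nm - measured_nm                     # performance measure
    if abs(error) <= tolerance_nm:
        return error, "tone: target torque reached"
    if error > 0:
        return error, f"continue tightening ({measured_nm:.1f} of {target_nm:.1f} N*m)"
    return error, f"over-torqued by {-error:.2f} N*m; back off and re-check"

for reading in (4.0, 9.3, 10.02, 10.6):
    print(torque_feedback(reading))

The same limit-comparison pattern extends to environmental status data.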
For example, such data and related limit comparisons may include temperature, humidity, the location and / or movement of hazardous devices (for example, a forklift is approaching the user and / or is within a distance or boundary from which an appendix User 108 can be during the execution of one or more of the task steps) and enable or block security devices. Likewise, the collected sensor data can be provided to other elements at other stations 102 or between stations 102 on the MAT 100 to control those other elements to avoid such hazards (for example, transmitting the sensor data to the forklift to alert the operator that a step will be performed and that the forklift must remain at a safe distance. [0055] In block 328, the orientation processing unit 104 determines whether the step has been completed. This can be achieved by determining whether the performance measure computed from the threshold value and the sensor data are within specified tolerances. In one embodiment, this is accomplished by comparing the state of physical structure 106, elements of station 102 or MAT 100 against an expected state of physical structure 106, elements of station 102, such as tools 110 or elements of MAT 100, if the step has been properly completed against the measured or actual state. Petition 870170090358, of 11/23/2017, p. 144/178 25/44 Going back to user example 108 by tightening a nut on a pin, block 328 would indicate that the step of tightening the pin is complete when the performance measure (the difference between the measured torque and the specified required torque, which represents the required value ) is within a tolerance (for example, 0.1 Nm) of the required torque. [0056] If the step is not completed, processing will be forwarded back to block 318 to receive and process other sensor data. The illustrated modality also includes an optional step 330 failure test that determines whether step performance has failed (for example, a problem has arisen that prevents step performance). If the performance of the step has failed, processing is passed to block 306 to determine another step, with the context that the step specified previously failed. If the step performance has not failed, processing is forwarded to block 318 to receive and process additional sensor data as before. The failure of a step can be determined using a comparison between a timer started when instructions for the step are sent (for example, block 308) and the time expected to complete the step. Alternatively, the timer could be started using sensor data to indicate when the actual step was started on physical structure 106. The failure of a step can also be determined according to the failure of a tool 110 required to perform the step, or the failure of the station 102 environment to reach a state necessary for the performance of the step. For example, if the step requires an ambient temperature of 23 degrees Celsius, and the air conditioning or heating in the installation that houses station 102 is unable to reach that value. [0057] If block 328 determines that the step is completed, processing is passed to block 332, which generates and transmits feedback data regarding the performance of the completed step. This feedback data is received and presented to the user 108 by the Petition 870170090358, of 11/23/2017, p. 145/178 26/44 display devices 114 or other means, as shown in blocks 334-336. 
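The completion test of block 328 and the failure test of block 330 can be summarized as a single classification over the performance measure and an elapsed-time budget; the overrun factor below is an illustrative choice, not a value taken from the description.

import time

def classify_step(measure: float, tolerance: float, started_at: float,
                  expected_duration_s: float, overrun_factor: float = 3.0) -> str:
    if measure <= tolerance:
        return "complete"                     # block 328: within specified tolerance
    if time.time() - started_at > overrun_factor * expected_duration_s:
        return "failed"                       # block 330: route back to block 306
    return "in_progress"                      # keep receiving sensor data (block 318)

print(classify_step(measure=0.05, tolerance=0.1,
                    started_at=time.time() - 30.0, expected_duration_s=90.0))

Once a step is classified as complete, the feedback data of block 332 is generated as described above.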
This feedback data can be used to send confirmation to user 108 that the step has been successfully completed (or unsuccessfully) and is presented after the step has been performed. [0058] Finally, referring to FIG. 3C, block 338 stores the information collected or computed in relation to the performance of the step. This information can include sensor data from some or all 112 sensors and performance measurements. Other information can also be stored, including the time it took for user 108 to perform the required step or task. [0059] Such data can be used to determine how effective and economical the determined steps are in carrying out the required task, and can be compared with other possible steps in carrying out the task. For example, initially, a first set of stations 102 can be provided with a particular sequence of steps to perform a task, while another set of stations 102 can be provided with a different test sequence. The time required to run the test can be determined for each set of stations and compared to a measure of quality in terms of how well the steps were performed, using the sensor data. In this way, two possible step sequences can be compared with real-world results, with the most effective of the two possible step sequences selected for future activity. [0060] The sensor data and performance measures can be used to improve the process of determining the step, illustrated in block 306. For example, experienced users 108 may know that it is not just performing the step, but how the step is performed, allowing them to get the job done more accurately and in less time. Using sensor data, such techniques and additional steps performed by the user 108, even if not in the original definition of the required step, can be identified and integrated Petition 870170090358, of 11/23/2017, p. 146/178 / 44 on how the step (s) should be carried out in the future. In earlier production, for example, an overview of steps can be provided to experienced users, and additional steps or steps skipped by those users can be used to streamline the process of determining the steps needed to perform the task. [0061] This can be done using machine learning techniques. For example, MAT 100 provides instructions to user 108 to perform a particular procedure through a series of steps, and user 108 performs the indicated steps. The MAT 100 can then use user sensor data 108, performing the steps, as well as other performance measures to “learn” (for example, through machine learning) which instructions (or type of instructions) confused the user, and review the instructions as needed to make the instructions more understandable. [0062] This can be determined, for example, from the time elapsed between when the step was presented to user 108 and the user started executing the step, and / or the elapsed time that user 108 took to complete the performance of the step (with excessive time indicating confusion on the part of user 108). The MAT 100 can also use machine learning techniques to modify the steps for additional clarity or other improvements. The sensor data can be used to determine the source of the user's confusion. For example, sensor data for the tools selected for the particular step can confirm or refute the notion that user 108 used the appropriate tools when attempting to perform the step. User 108 can also provide direct input (statements that they are confused or questions asked by user 108 to clarify instructions that they do not consider clear). 
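The comparison of candidate step sequences described in paragraph [0059] can be reduced to scoring archived runs; the durations, rework counts and penalty weight below are invented for illustration.

from statistics import mean

sequence_history = {
    "sequence_A": [{"duration_s": 310, "rework": 0}, {"duration_s": 295, "rework": 1}],
    "sequence_B": [{"duration_s": 260, "rework": 0}, {"duration_s": 272, "rework": 0}],
}

def sequence_score(runs, rework_penalty_s: float = 120.0) -> float:
    # Lower is better: mean duration plus a fixed penalty per rework event.
    return mean(run["duration_s"] + rework_penalty_s * run["rework"] for run in runs)

best = min(sequence_history, key=lambda name: sequence_score(sequence_history[name]))
print(best)   # sequence_B is selected for future activity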
[0063] Procedures / steps can be modified for all users 108 based on the aggregate of those previous performances of the Petition 870170090358, of 11/23/2017, p. 147/178 28/44 user in the execution of steps, or can be modified on a user-to-user basis, so that steps generated and presented to each user 108 by MAT 100 are customized for each particular user 108 based on that user's previous performance in steps or procedures previously performed. For example, MAT 100 can generate a set of baseline steps that must be performed to complete a specific task. More experienced users or those who completed these tasks quickly can be presented with shortened versions of the instructions, while less experienced users or those who took longer to complete those tasks can be presented with versions of the instructions suggesting how the step could be better or more quickly completed . Such versions can be based, for example, on sensor data compiled from other users 108, who completed the assigned steps or tasks more quickly. This allows the experience of all users who perform the task to be quickly shared with more inexperienced users. In addition, machine learning techniques and sensor data can be used to estimate the user's experience and expertise, and the MAT 100 can provide instructions compatible with that experience. For example, different sets of step instructions can be generated to perform the same tasks, and MAT 100 can decide which set of instructions to provide user 108, depending on a professed or estimated level of experience of the user. [0064] Machine learning techniques can also be used to diagnose and solve problems using sensor data collected in the performance of the steps. The sensor data from the production or maintenance of physical structure 106 can be examined to try to correlate these failures with how one or more of the steps performed in assembling or maintaining the product using data mining or machine learning techniques. For example, a set of products Petition 870170090358, of 11/23/2017, p. 148/178 29/44 can be found to have a particular failure (for example, failure of a pin). Data mining techniques can be used to analyze the sensor data collected in the production or maintenance of the physical structure 106 and try to correlate patterns with those failed physical structures. In one example, this analysis could conclude that each failed bolt received higher torque than failed bolts, increasing the possibility that the torque specification is incorrect and should be changed. In a more complex example the analysis may reveal that it was not the torque applied to the pin, but rather a tightening pattern or a failure in a related part. [0065] Finally, block 340 determines whether the task is complete (for example, additional steps are necessary to complete the task). In one embodiment, this is accomplished by comparing the state of physical structure 106, elements of station 102 or MAT 100 against an expected state of physical structure 106, elements of station 102 such as tools 110 or elements of MAT 100 if the task has been properly completed against the measured or real state. If additional steps are required, processing is forwarded to block 306, which determines the next step, and processing continues as described above. If no additional steps are required, block 344 directs processing to the next task (if any). 
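The data-mining example of paragraph [0064], correlating later bolt failures with the torque recorded when the step was performed, can be illustrated with a toy comparison; all values are invented, and a gap of this kind is only a lead for engineers to review the torque specification, not proof of cause.

from statistics import mean

step_records = [
    {"applied_torque_nm": 10.8, "failed_later": True},
    {"applied_torque_nm": 10.6, "failed_later": True},
    {"applied_torque_nm": 9.9,  "failed_later": False},
    {"applied_torque_nm": 10.0, "failed_later": False},
    {"applied_torque_nm": 10.1, "failed_later": False},
]

def mean_torque(failed: bool) -> float:
    return mean(r["applied_torque_nm"] for r in step_records if r["failed_later"] is failed)

print(f"parts that later failed: {mean_torque(True):.2f} N*m")
print(f"parts still in service:  {mean_torque(False):.2f} N*m")

The determination by block 340 of whether the task as a whole is complete is made in a similar state-based manner.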
In one embodiment, this is accomplished by comparing the state of the elements of station 102 or MAT 100 against an expected state of station 102 or MAT 100 if the steps and tasks have been properly completed. The process can reduce errors, rework, safety risks and delays in maintenance, repair and service of aircraft or other products where complex maintenance operations are performed. The process can also reduce training and lower execution costs for maintenance, repair and service of aircraft or other products where complex maintenance operations are performed. Petition 870170090358, of 11/23/2017, p. 149/178 30/44 [0066] Task performance data (for example, elapsed time or duration to perform the entire task or other measures of task performance) can be optionally generated and stored, as shown in block 342. [0067] Task or step performance data, together with sensor data used to generate task or step performance data, can be generated in real time and transmitted to other MAT 100 elements for real time digital documentation purposes. step performance, and used for archiving and / or optimization purposes, use of parts for supply chain updates, audit records and maintenance records. [0068] FIG. 4 is a diagram illustrating the operation of the guidance processing unit 104 with other elements of stations 102 and central processor 120. In this embodiment, guidance processing unit 104 is implemented using an Internet of Things (IoT) connection point ) 402 communicatively connected to interface 206 to receive data from, and transmit data to, sensors 112 and effectors (e.g., tools 110). [0069] Using sensor interface 206, connection point 402 collects data from sensors 112 and tools 110 and provides this information for processing and / or storage on one or more public servers 404A-404N (hereinafter ( es) public 404 (s) and / or one or more private servers 406 in MAT 100. Public servers 404 are cloud-based processing and storage devices located “in the cloud” (for example, away from station 102 or MAT 100 and generally managed by another entity). The private server 406 is a data processing and storage device that is usually arranged with station 102 and / or MAT 100 and is managed by the same entity. Petition 870170090358, of 11/23/2017, p. 150/178 31/44 [0070] Connection point 402 also receives commands and data from public server (s) 404 and private server 406 and provides these commands for interface 206 and then for sensor (s) 112 and tool (s) 110 as needed. Public servers 404 and private server 406 also provide instructional data illustrating the step (s) for display devices (204) such as display 114A or an audio playback device. The machine learning / processing module 408 accesses instruction data and can modify instruction data based on data from previous instructions, as well as data from sensor 112 and tool 110. [0071] FIGS. 5A and 5B are diagrams illustrating an exemplary test bed implementation of guidance processing unit 104 and related elements of station 102. In this embodiment, sensors 112 include a push button 502 (for example, a GROVE button, which responds to a momentary pulse outputting a digital voltage signal with a high logic signal and outputting a low logic signal when released) and a 504 potentiometer (which provides an analog voltage signal). 
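As an illustrative sketch only, the connection point 402 might forward a reading from the test-bed potentiometer 504 to the servers over a publish/subscribe protocol such as MQTT, as used in the test bed; the sketch assumes the paho-mqtt package (1.x client API shown), a placeholder broker address, and invented topic and field names.

import json
import time
import random
import paho.mqtt.client as mqtt     # assumption: paho-mqtt is installed (1.x API shown)

client = mqtt.Client()
client.connect("broker.local", 1883)                   # placeholder broker address
client.loop_start()
for _ in range(5):
    reading = {
        "sensor": "potentiometer_504",
        "value": round(random.uniform(0.0, 3.3), 2),   # simulated analog voltage
        "timestamp": time.time(),
    }
    client.publish("station102/sensors", json.dumps(reading))
    time.sleep(1.0)
client.loop_stop()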
Effectors or tools 110 include, for example, a light emitting diode 506 that receives a digital command, a stepper motor 508 that receives a digital command, and/or a liquid crystal display 510 that receives an inter-integrated circuit (I2C) command. Interface 206 can be implemented by a modular processor device 514 (e.g., an ARDUINO processor) communicatively coupled to an input/output (I/O) card, such as a GROVE base shield 512.
[0072] Furthermore, in this embodiment, the IoT connection point 402 is implemented by a processor 519 operating with an operating system (OS) 518, such as RASPBIAN, running an IoT programming tool 516 such as NODE-RED. The IoT connection point 402 implements open-source real-time messaging (MQTT) over WiFi and can send information to and from any combination of other IoT connection points 402.
[0073] The IoT connection point 402 communicates with one or more public servers 404A-404N (collectively referred to hereafter as public server(s) 404) and/or one or more private servers 406. Each public server 404A-404N includes a respective cognitive system 520A-520N, as well as a respective cloud platform 528A-528N, each implementing a software integration tool that provides hardware device intercommunication. Each cognitive system 520 combines artificial intelligence (AI) and analytical software to produce a system that can answer questions.
[0074] In one embodiment, each public server 404 can be implemented using different software constructs from different vendors. For example, cognitive system 520A may include the IBM WATSON IoT platform operating on a BLUEMIX cloud platform 528A running a NODE-RED software integration tool 522A, while cognitive system 520N may comprise a MICROSOFT AZURE IoT hub operating with an AZURE application service 528N running a JAVA application integration tool 522N.
[0075] The servers 404 and 406 communicate securely with a visualization tool 536 running an augmented reality (AR) application 534 through a representational state transfer (REST) compliant application program interface (API) 532, which provides API security to ensure that only authorized entities can access the system. Compliance with the REST architectural constraints places limits on the interaction between elements, standardizing the syntax and means by which the elements communicate with one another. The result is that architectural elements are essentially "pluggable", allowing one version of an architectural element to be replaced by another version without any significant change in operation or change in other architectural elements. In one embodiment, the visualization tool is implemented by an AR authoring tool, such as HOLOLENS, available from MICROSOFT CORPORATION. Using this visualization tool 536, IoT data can be viewed on mobile devices, with REST-based APIs providing web-based access to the public server(s) 404 and/or private server(s) 406. In one embodiment, the private server 406 can be implemented using a Message Queue Telemetry Transport (MQTT) broker 524 and an operating system 530 running a NODE-RED software integration tool 526.
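The following is a minimal sketch of the style of REST interaction described above, as a visualization client might use it to fetch the next step and report its completion. The endpoint paths, token handling and JSON field names are hypothetical; only the use of a secured REST API is taken from the description.

```python
# Minimal sketch of a REST client (e.g. for a visualization tool such as 536).
# API root, endpoints, auth token and JSON fields are assumptions for
# illustration; they are not a documented interface of the system.
import requests

BASE_URL = "https://server.example.com/api/v1"   # hypothetical API root
HEADERS = {"Authorization": "Bearer <token>"}    # API security per the description

# Fetch instructional data for the next step of a (hypothetical) task.
resp = requests.get(f"{BASE_URL}/tasks/remove-pump/steps/next",
                    headers=HEADERS, timeout=10)
resp.raise_for_status()
step = resp.json()
print(step.get("instruction_text"))

# Report that the step was performed, together with sensor-derived values,
# so the server side can compute and verify the step performance measure.
report = {"step_id": step.get("id"), "elapsed_s": 42.0, "torque_nm": 12.1}
requests.post(f"{BASE_URL}/steps/{step.get('id')}/performance",
              json=report, headers=HEADERS, timeout=10).raise_for_status()
```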
Hardware environment
[0076] FIG. 6 illustrates an exemplary computer system 600 that could be used to implement processing elements from the above description, including the guidance processing unit 104, central processor 120, display devices 114 and/or sensors 112. A computer 602 comprises one or more processors 604A and 604B and a memory, such as random access memory (RAM) 606. The processor(s) may include a general purpose processor 604A and/or a special purpose processor 604B. General purpose processors 604A generally do not require any specific computer language or software and are designed to perform general processing operations, but can be programmed for special applications. Special purpose processors 604B may require a specific computer language or software, may implement some functions in hardware, and are typically optimized for a specific application. Computer 602 is operatively coupled to a display 622, which presents images such as windows to the user in a graphical user interface (GUI). The computer 602 can be coupled with other devices, such as a keyboard 614, a mouse device 616, a printer 628, etc. Naturally, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals and other devices, can be used with the computer 602.
[0077] Generally, computer 602 operates under the control of an operating system 608 stored in memory 606, and interfaces with the user to accept inputs and commands and present results through a GUI module 618A. Although the GUI module 618B is portrayed as a separate module, the instructions that perform the GUI functions can be resident or distributed in the operating system 608, in the computer program 610, or implemented with special purpose processors and memory. Computer 602 also implements a compiler 612 that allows a computer application program 610 written in a programming language such as C++, C#, Java, Python or another language to be translated into code readable by the processor 604. Once compiled, computer program 610 accesses and manipulates data stored in memory 606 of computer 602 using the interfaces and logic that were generated using compiler 612. Computer 602 also optionally comprises an external communication device, such as a modem, a satellite link, an Ethernet card or another device for communicating with other computers.
[0078] In one embodiment, instructions implementing the operating system 608, computer program 610 and compiler 612 are tangibly embodied on a computer-readable medium, for example, data storage device 624, which could include one or more fixed or removable data storage devices, such as a hard drive, a CD-ROM drive, a flash drive, etc. In addition, operating system 608 and computer program 610 consist of instructions that, when read and executed by computer 602, cause computer 602 to perform the operations described herein. The computer program 610 and/or operating instructions can also be tangibly embodied in memory 606 and/or data communication devices 630, thereby making a computer program product or article of manufacture. Accordingly, the terms "article of manufacture", "program storage device" and "computer program product", as used herein, are intended to cover a computer program accessible from any computer-readable device or medium.
[0079] Those skilled in the art will recognize that many modifications can be made to this configuration without departing from the scope of the present description.
For example, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals and other devices, can be used.
Conclusion
[0080] This concludes the description of the preferred embodiments of the present description.
[0081] The above describes a cognitive assistant that allows a maintainer to speak to an application using natural language. The maintainer can quickly interact with a hands-free application without the need to use complex user interfaces or memorized voice commands. The assistant provides instructions to the maintainer using augmented reality visual and audio cues. The assistant will guide the maintainer through maintenance tasks and verify proper execution using IoT sensors. If, after completing a step, the IoT sensors are not as expected, the maintainer is notified of how to resolve the situation.
[0082] The preceding description of the preferred embodiments was presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the description to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The scope of rights is intended to be limited not by this detailed description, but rather by the attached claims.
[0083] In addition, the description includes embodiments according to the following clauses:
Clause 1. A method for providing guidance for performing a task with at least one step performed on a physical structure at a station, comprising: (a) receiving, in a guidance processing unit, a command from a performance entity, the command invoking the task; (b) determining, in the guidance processing unit, at least one step from the command; (c) transmitting, from the guidance processing unit to the performance entity, instructional data illustrating the performance of the at least one step; (d) receiving, in the guidance processing unit, real-time sensor data generated by a sensor close to the physical structure that senses the performance of the step; and (e) computing a step performance measure according to the real-time sensor data.
Clause 2. The method of Clause 1, in which computing a step performance measure comprises: comparing the real-time sensor data with a threshold value; computing the performance measure according to the comparison; and verifying the performance of the step according to the performance measure.
Clause 3. The method of Clause 2, further comprising: generating feedback data according to the comparison; and transmitting, from the guidance processing unit, the feedback data for presentation at the station.
Clause 4. The method of Clause 2, in which computing the step performance measure according to the real-time sensor data comprises: generating feedback data using the real-time sensor data; and displaying the feedback data at the station.
Clause 5. The method of Clause 4, in which: the feedback data is presented simultaneously with the performance of the step; and the method further comprises: repeating steps (c) and (d) until the step is completed.
Clause 6. The method of Clause 4, in which the feedback data is presented after the performance of the step.
Clause 7. The method of any preceding clause, further comprising: computing, by the guidance processing unit, a performance measure for the step performance from the real-time sensor data; and storing the performance measure.
Clause 8. The method of Clause 7, in which: the performance measure is an elapsed time to perform the step, the elapsed time computed from a threshold value and the real-time sensor data.
Clause 9. The method of any preceding clause, further comprising: storing the real-time sensor data by the guidance processing unit; and comparing the real-time sensor data with other real-time sensor data sensing another performance of the step on another physical structure.
Clause 10. The method of any preceding clause, further comprising: determining the task from the received command; and determining at least one step from the determined task.
Clause 11. The method of Clause 10, in which: determining the task from the received command comprises: generating a database query from the received command using a natural language interpreter; and querying a database according to the database query to determine the task; and determining at least one step from the determined task comprises: determining the at least one step from the determined task.
Clause 12. The method of Clause 11, in which: the task is one of a plurality of tasks performed on the physical structure; and the database query is further determined according to current context data, including: information about another of the plurality of tasks performed on the physical structure; and task restrictions imposed by at least one of the physical structure and a station environment.
Clause 13. The method of any preceding clause, in which the instructional data illustrating the performance of the at least one step comprises a visual representation of the step for presentation in augmented reality through an augmented reality headset.
Clause 14. The method of any preceding clause, in which the real-time sensor data describes a state of the physical structure.
Clause 15. The method of Clause 14, in which the sensor is a visual sensor that observes the performance of the step and the real-time sensor data comprises video data.
Clause 16. The method of Clause 14, in which the sensor is arranged on the physical structure on which the step is performed.
Clause 17. The method of any preceding clause, in which the real-time sensor data describes a state of a tool used to perform the step.
Clause 18. The method of Clause 17, in which the sensor is a visual sensor that observes the performance of the step and the real-time sensor data comprises video data.
Clause 19. The method of Clause 17, in which the sensor is arranged on the tool used to perform the step.
Clause 20. The method of any preceding clause, in which the real-time sensor data describes a state of devices that collaborate in the task.
Clause 21. The method of any preceding clause, in which the data describes a state of an environment in which the task is performed.
Clause 22. The method of any preceding clause, in which the command is a hands-free command from a user.
Clause 23. A system to provide guidance for performing a task with at least one step performed on a physical structure at a station, comprising: a guidance processing unit, the guidance processing unit comprising a processor communicatively coupled to a memory that stores instructions, the instructions comprising instructions for: receiving a command from a performance entity, the command invoking the task; determining at least one step from the command; transmitting instructional data illustrating the performance of the at least one step to the performance entity; receiving real-time sensor data generated by a sensor close to the physical structure that senses the performance of the step; and computing a step performance measure according to the real-time sensor data.
Clause 24. The system of Clause 23, in which the instructions for computing a step performance measure according to the real-time sensor data comprise instructions for: comparing the real-time sensor data with a threshold value; computing the performance measure according to the comparison; and verifying the performance of the step according to the performance measure.
Clause 25. The system of Clause 24, in which the instructions further comprise instructions for: generating feedback data according to the comparison; and transmitting, from the guidance processing unit, the feedback data for presentation at the station.
Clause 26. The system of Clause 24, in which the instructions for computing the step performance measure according to the real-time sensor data comprise instructions for: generating feedback data using the real-time sensor data; and displaying the feedback data at the station.
Clause 27. The system of Clause 26, in which the feedback data is presented simultaneously with the performance of the step; and the instructions further comprise instructions for repeating steps (c) and (d) until the step is completed.
Clause 28. The system of Clause 26, in which the feedback data is presented after the performance of the step.
Clause 29. The system of any of Clauses 23-28, in which the instructions further comprise instructions for: computing a performance measure for the step performance from the real-time sensor data; and storing the performance measure.
Clause 30. The system of Clause 29, in which: the performance measure is an elapsed time to execute the step, the elapsed time computed from a threshold value and the real-time sensor data.
Clause 31. The system of any of Clauses 23-30, in which the instructions further comprise instructions for: storing the real-time sensor data by the guidance processing unit; and comparing the real-time sensor data with other real-time sensor data sensing another performance of the step on another physical structure.
Clause 32. The system of any of Clauses 23-31, in which the instructions further comprise instructions for: determining the task from the received command; and determining at least one step from the determined task.
Clause 33. The system of Clause 32, in which: the instructions for determining the task from the received command include instructions for: generating a database query from the received command using a natural language interpreter; and querying a database according to the database query to determine the task; and the instructions for determining at least one step from the determined task include instructions for: determining the at least one step from the determined task.
Clause 34. The system of Clause 33, in which: the task is one of a plurality of tasks performed on the physical structure; and the database query is further determined according to current context data, including: information about another of the plurality of tasks performed on the physical structure; and task restrictions imposed by at least one of the physical structure and a station environment.
Clause 35. The system of any of Clauses 23-34, in which the instructional data illustrating the performance of the at least one step comprises a visual representation of the step for presentation in augmented reality through an augmented reality headset.
Clause 36. The system of any of Clauses 23-35, in which the real-time sensor data describes a state of the physical structure.
Clause 37. The system of Clause 36, in which the sensor is a visual sensor that observes the performance of the step and the real-time sensor data comprises video data.
Clause 38. The system of Clause 36, in which the sensor is arranged on the physical structure on which the step is performed.
Clause 39. The system of any of Clauses 23-38, in which the real-time sensor data describes a state of a tool used to perform the step.
Clause 40. The system of Clause 39, in which the sensor is a visual sensor that observes the performance of the step and the real-time sensor data comprises video data.
Clause 41. The system of Clause 39, in which the sensor is arranged on the tool used to perform the step.
Clause 42. The system of any of Clauses 23-41, in which the data describes a state of devices that collaborate in the task.
Clause 43. The system of any of Clauses 23-42, in which the data describes a state of an environment in which the task is performed.
Clause 44. The system of any of Clauses 23-43, in which the command is a hands-free command from a user.
Clause 45. A system to provide guidance for performing a task with at least one step performed on a physical structure at a station, comprising: a sensor close to the physical structure; a presentation device; and a guidance processing unit, the guidance processing unit comprising a processor communicatively coupled to a memory that stores instructions, the instructions comprising instructions for: receiving a command from a performance entity, the command invoking the task; determining at least one step from the command; transmitting instructional data illustrating the performance of the at least one step to the performance entity for presentation by the presentation device; receiving real-time sensor data generated by the sensor close to the physical structure that senses the performance of the step; and computing a step performance measure according to the real-time sensor data.
Claims
1. Method for providing guidance to perform a task with at least one step performed on a physical structure (106) at a station (102), characterized by the fact that it comprises: (a) receiving, in a guidance processing unit (104), a command from a performance entity (104), the command invoking the task; (b) determining, in the guidance processing unit (104), at least one step from the command; (c) transmitting, from the guidance processing unit (104) to the performance entity (104), instructional data illustrating the performance of the at least one step; (d) receiving, in the guidance processing unit (104), real-time sensor data (112) generated by a sensor (112) close to the physical structure (106) that senses the performance of the step; and (e) computing a step performance measure according to the real-time sensor data (112).
2. Method according to claim 1, characterized by the fact that computing a step performance measure comprises: comparing the real-time sensor data (112) with a threshold value; computing the performance measure according to the comparison; and verifying the performance of the step according to the performance measure.
3. Method according to claim 2, characterized by the fact that it further comprises: generating feedback data according to the comparison; and transmitting, from the guidance processing unit (104), the feedback data for presentation at the station (102).
4. Method according to claim 2, characterized by the fact that computing the step performance measure according to the real-time sensor data (112) comprises: generating feedback data using the real-time sensor data (112); and displaying the feedback data at the station (102).
5. Method according to claim 4, characterized by the fact that the feedback data is presented simultaneously with the performance of the step; and the method further comprises: repeating steps (c) and (d) until the step is completed.
6. Method according to any of the preceding claims, characterized by the fact that it further comprises: computing, by the guidance processing unit (104), a performance measure for the performance of the step from the real-time sensor data (112); and storing the performance measure.
7. Method according to claim 6, characterized by the fact that the performance measure is an elapsed time to execute the step, the elapsed time computed from a threshold value and the real-time sensor data (112).
8. Method according to any of the preceding claims, characterized by the fact that it further comprises: storing, by the guidance processing unit (104), the real-time sensor data (112); and comparing the real-time sensor data (112) with other real-time sensor data (112) sensing another performance of the step on another physical structure (106).
9. Method according to any of the preceding claims, characterized by the fact that it further comprises: determining the task from the received command; and determining at least one step from the determined task.
10. Method according to claim 9, characterized by the fact that: determining the task from the received command comprises: generating a database query from the received command using a natural language interpreter; and querying a database according to the database query to determine the task; and determining at least one step from the determined task comprises: determining the at least one step from the determined task.
11. Method according to claim 10, characterized by the fact that: the task is one of a plurality of tasks performed on the physical structure (106); and the database query is further determined according to current context data, including: information about another of the plurality of tasks performed on the physical structure (106); and task restrictions imposed by at least one of the physical structure (106) and a station (102) environment.
12. Method according to any of the preceding claims, characterized by the fact that the instructional data illustrating the performance of the at least one step comprises a visual representation of the step for presentation in augmented reality via an augmented reality headset (114B), in which the real-time sensor data (112) optionally describes a state of the physical structure (106), and in which the real-time sensor data (112) optionally describes a state of a tool (110) used to perform the step.
13. Method according to claim 12, characterized by the fact that the sensor (112) is a visual sensor that observes the performance of the step, and the real-time sensor data (112) comprises video data, and in which the sensor (112) is optionally disposed on the tool (110) used to perform the step.
14. Method according to any of the preceding claims, characterized by the fact that the real-time sensor data (112) describes a state of devices collaborating in the task, in which the data optionally describes a state of an environment in which the task is performed, and in which the command is optionally a hands-free command from a user.
15. System to provide guidance to perform a task with at least one step performed on a physical structure (106) at a station (102), characterized by the fact that it comprises: a guidance processing unit (104), the guidance processing unit (104) comprising a processor (604) communicatively coupled to a memory (606) that stores instructions comprising instructions for executing the method as defined in any one of claims 1 to 14.